Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization
We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule
classification focusing on (i) usefulness of gradient tree boosting (XGBoost)
and (ii) effectiveness of parameter optimization using Bayesian optimization
(Tree Parzen Estimator, TPE) and random search. In total, 99 lung nodules (62 lung
cancers and 37 benign lung nodules) were included from public databases of CT
images. A variant of local binary pattern was used for calculating feature
vectors. Support vector machine (SVM) or XGBoost was trained using the feature
vectors and their labels. TPE or random search was used for parameter
optimization of SVM and XGBoost. Leave-one-out cross-validation was used for
optimizing and evaluating the performance of our CADx system. Performance was
evaluated using area under the curve (AUC) of receiver operating characteristic
analysis. The AUC was calculated 10 times, and its average was obtained. The
best averaged AUCs of SVM and XGBoost were 0.850 and 0.896, respectively; both
were obtained using TPE. XGBoost was generally superior to SVM. Optimal
parameters for achieving a high AUC were obtained with fewer trials when using
TPE than with random search. In conclusion, XGBoost was better than SVM for
classifying lung nodules, and TPE was more efficient than random search for
parameter optimization.
Comment: 29 pages, 4 figures
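The hyper-parameter search described above can be sketched in miniature. The snippet below runs a plain random search over two hypothetical SVM-style parameters against a synthetic stand-in objective; the objective function and parameter ranges are assumptions for illustration only, standing in for the leave-one-out cross-validated AUC, and the study's actual TPE implementation is not reproduced here.

```python
import random

# Stand-in objective: in the study this would be the leave-one-out
# cross-validated AUC of SVM or XGBoost for a given parameter setting.
# This smooth synthetic surface (peak 1.0 at c=2, gamma=-1) is an
# assumption used purely for illustration.
def surrogate_auc(c, gamma):
    return 1.0 - ((c - 2.0) ** 2 + (gamma + 1.0) ** 2) / 50.0

def random_search(n_trials, seed=0):
    """Draw parameters uniformly at random and keep the best score."""
    rng = random.Random(seed)
    best_score, best_params = -float("inf"), None
    for _ in range(n_trials):
        c = rng.uniform(-5, 5)      # e.g. log2(C) for an RBF-kernel SVM
        gamma = rng.uniform(-5, 5)  # e.g. log2(gamma)
        score = surrogate_auc(c, gamma)
        if score > best_score:
            best_score, best_params = score, (c, gamma)
    return best_score, best_params

best_score, best_params = random_search(100)
```

A model-based optimizer such as TPE would replace the uniform draws with draws biased toward previously high-scoring regions, which is how it reaches good parameters in fewer trials.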
Boosting Radiology Report Generation by Infusing Comparison Prior
Recent transformer-based models have made significant strides in generating
radiology reports from chest X-ray images. However, a prominent challenge
remains: these models often lack prior knowledge, resulting in the generation
of synthetic reports that mistakenly reference non-existent prior exams. This
discrepancy can be attributed to a knowledge gap between radiologists and the
generation models. While radiologists possess patient-specific prior
information, the models solely receive X-ray images at a specific time point.
To tackle this issue, we propose a novel approach that leverages a rule-based
labeler to extract comparison prior information from radiology reports. This
extracted comparison prior is then seamlessly integrated into state-of-the-art
transformer-based models, enabling them to produce more realistic and
comprehensive reports. Our method is evaluated on English report datasets, such
as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses
baseline models in terms of natural language generation metrics. Notably, our
model generates reports that are free from false references to non-existent
prior exams, setting it apart from previous models. By addressing this
limitation, our approach represents a significant step towards bridging the gap
between radiologists and generation models in the domain of medical report
generation.
Comment: Accepted at ACL 2023, BioNLP Workshop
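A rule-based comparison labeler of the kind described can be sketched as a small pattern matcher over report text. The pattern list below is an assumption for illustration; the paper's actual rules are not reproduced here.

```python
import re

# Hypothetical patterns that signal a reference to a prior examination.
# These example rules are an assumption, not the paper's rule set.
COMPARISON_PATTERNS = [
    r"\bcompared? (?:to|with) (?:the )?prior\b",
    r"\bprevious (?:study|exam|radiograph|film)\b",
    r"\bunchanged\b",
    r"\binterval (?:change|improvement|worsening)\b",
]
_regex = re.compile("|".join(COMPARISON_PATTERNS), re.IGNORECASE)

def has_comparison_prior(report: str) -> bool:
    """Return True if the report text references a prior examination."""
    return _regex.search(report) is not None
```

For example, `has_comparison_prior("Unchanged since the previous exam.")` returns True, while a report with no comparison language returns False; such a flag (or the matched span itself) is the kind of prior information that could be fed to the report generator.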
Development of pericardial fat count images using a combination of three different deep-learning models
Rationale and Objectives: Pericardial fat (PF), the thoracic visceral fat
surrounding the heart, promotes the development of coronary artery disease by
inducing inflammation of the coronary arteries. For evaluating PF, this study
aimed to generate pericardial fat count images (PFCIs) from chest radiographs
(CXRs) using a dedicated deep-learning model.
Materials and Methods: The data of 269 consecutive patients who underwent
coronary computed tomography (CT) were reviewed. Patients with metal implants,
pleural effusion, a history of thoracic surgery, or a history of malignancy were
excluded. Thus, the data of 191 patients were used. PFCIs were generated from
the projection of three-dimensional CT images, where fat accumulation was
represented by a high pixel value. Three different deep-learning models,
including CycleGAN, were combined in the proposed method to generate PFCIs from
CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for
comparison with the proposed method. To evaluate the image quality of the
generated PFCIs, structural similarity index measure (SSIM), mean squared error
(MSE), and mean absolute error (MAE) of (i) the PFCI generated using the
proposed method and (ii) the PFCI generated using the single model were
compared.
Results: The mean SSIM, MSE, and MAE were as follows: 0.856, 0.0128, and
0.0357, respectively, for the proposed model; and 0.762, 0.0198, and 0.0504,
respectively, for the single CycleGAN-based model.
Conclusion: PFCIs generated from CXRs with the proposed model showed better
performance than those with the single model. PFCI evaluation without CT may be
possible with the proposed method.
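Two of the three image-quality metrics used above, MSE and MAE, reduce to simple per-pixel averages. A minimal sketch over flattened toy pixel values (the values below are assumptions; real PFCIs are full 2-D images, and SSIM additionally requires windowed luminance/contrast/structure terms not shown here):

```python
def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    """Mean absolute error between two equal-length pixel sequences."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy pixel intensities in [0, 1]; a real evaluation would compare the
# generated PFCI with the CT-derived ground-truth PFCI pixel by pixel.
reference = [0.0, 0.5, 1.0, 0.25]
generated = [0.1, 0.4, 0.9, 0.30]
err_mse = mse(reference, generated)
err_mae = mae(reference, generated)
```

Lower MSE and MAE (and higher SSIM) indicate a generated image closer to the reference, which is the sense in which the combined three-model pipeline outperformed the single CycleGAN.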
Leveraging Token-Based Concept Information and Data Augmentation in Few-Resource NER: ZuKyo-EN at the NTCIR-16 Real-MedNLP task
In this paper, we discuss our contribution to the NII Testbeds and Community for Information Access Research (NTCIR-16) Real-MedNLP shared task. Our team (ZuKyo) participated in the English subtask: few-resource named entity recognition. The main challenge in this low-resource task was the small number of training documents annotated with a high number of tags and attributes. For our submissions, we used different general and domain-specific transfer-learning approaches in combination with multiple data-augmentation methods. In addition, we experimented with models enriched with biomedical concepts encoded as token-based input features.
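Token-based concept features of the kind mentioned can be sketched as a per-token membership flag against a concept vocabulary. The tiny vocabulary below is an assumption for illustration; the paper's actual concept source and encoding are not reproduced here.

```python
# Hypothetical biomedical concept vocabulary (an assumption; a real
# system would draw on a terminology resource, not a hand-typed set).
CONCEPT_VOCAB = {"fever", "cough", "pneumonia", "aspirin"}

def concept_features(tokens):
    """Pair each token with a binary flag: 1 if it is a known concept.

    The flags form an extra input feature column that can be fed to a
    token-classification model alongside the token embeddings.
    """
    return [(tok, 1 if tok.lower() in CONCEPT_VOCAB else 0)
            for tok in tokens]

feats = concept_features(["Patient", "reports", "fever", "and", "cough"])
```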
Approach for Named Entity Recognition and Case Identification Implemented by ZuKyo-JA Sub-team at the NTCIR-16 Real-MedNLP Task
In this NTCIR-16 Real-MedNLP shared task paper, we present the methods of the ZuKyo-JA subteam for solving the Japanese parts of Subtask1 and Subtask3 (Subtask1-CR-JA, Subtask1-RR-JA, Subtask3-RR-JA). Our solution is based on a sliding-window approach using a Japanese BERT pre-trained masked-language model, which served as a common architecture for addressing the specific subtasks. We additionally present a method that makes extensive use of medical knowledge for the case identification subtask (Subtask3-RR-JA).
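The sliding-window step itself is model-agnostic and can be sketched as follows: a document longer than the encoder's maximum input length is split into overlapping windows, each of which is encoded separately (window size and stride below are illustrative assumptions, not the subteam's actual settings).

```python
def sliding_windows(tokens, window_size, stride):
    """Split a long token sequence into overlapping fixed-size windows.

    window_size would be at most the encoder's maximum input length
    (e.g. 512 for BERT); overlapping windows let predictions near a
    window edge be recovered from the neighboring window.
    """
    if len(tokens) <= window_size:
        return [tokens]
    windows = []
    start = 0
    while start < len(tokens):
        windows.append(tokens[start:start + window_size])
        if start + window_size >= len(tokens):
            break
        start += stride
    return windows

windows = sliding_windows(list(range(10)), window_size=4, stride=2)
```

Per-window predictions are then merged back onto the original token positions, typically by preferring the window in which a token sits farthest from the edge.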
Temporal subtraction CT with nonrigid image registration improves detection of bone metastases by radiologists: results of a large-scale observer study
To determine whether temporal subtraction (TS) CT obtained with non-rigid image registration improves the detection of various bone metastases during serial clinical follow-up examinations by numerous radiologists. Six board-certified radiologists retrospectively and sequentially scrutinized CT images of patients with a history of malignancy. These radiologists selected 50 positive and 50 negative subjects, with and without bone metastases, respectively. Furthermore, for each subject, they selected by consensus a pair of previous and current CT images satisfying predefined criteria. Previous images were non-rigidly transformed to match the current images and subtracted from them to automatically generate TS images. Subsequently, 18 radiologists independently interpreted the 100 CT image pairs to identify bone metastases, both without and with TS images, with the two interpretations separated by an interval of at least 30 days. Jackknife free-response receiver operating characteristic (JAFROC) analysis was conducted to assess observer performance. Compared with interpretation without TS images, interpretation with TS images was associated with a significantly higher mean figure of merit (0.710 vs. 0.658; JAFROC analysis, P = 0.0027). Mean lesion-based sensitivity was significantly higher for interpretation with TS than without TS (46.1% vs. 33.9%; P = 0.003). The mean false-positive count per subject was also significantly higher with TS than without TS (0.28 vs. 0.15; P < 0.001). At the subject level, mean sensitivity was significantly higher with TS images than without them (73.2% vs. 65.4%; P = 0.003). There was no significant difference in mean specificity (0.93 vs. 0.95; P = 0.083). TS significantly improved overall performance in the detection of various bone metastases.
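The temporal-subtraction step reduces to "warp the previous scan into the current scan's frame, then subtract." A minimal 1-D sketch with toy attenuation values; the register() placeholder below is an assumption (an identity warp), not the study's non-rigid registration algorithm:

```python
def register(previous, current):
    """Placeholder for non-rigid registration: warp the previous scan
    into the current scan's coordinate frame. Here it is an identity
    warp, purely to keep the sketch self-contained."""
    return list(previous)

def temporal_subtraction(previous, current):
    """Subtract the registered previous scan from the current scan;
    interval change (e.g. a new sclerotic metastasis) survives while
    stable anatomy cancels toward zero."""
    warped = register(previous, current)
    return [c - p for p, c in zip(warped, current)]

current_scan = [100, 120, 180, 110]   # toy CT attenuation values
previous_scan = [100, 118, 120, 112]
ts = temporal_subtraction(previous_scan, current_scan)
# The new +60 jump at the third position is the kind of interval change
# a TS image makes conspicuous.
```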
Stromal area differences with epithelial-mesenchymal transition gene changes in conjunctival and orbital mucosa-associated lymphoid tissue lymphoma
Purpose: To examine the molecular biological differences between conjunctival mucosa-associated lymphoid tissue (MALT) lymphoma and orbital MALT lymphoma in ocular adnexal lymphoma.
Methods: Observational case series of 129 consecutive, randomized cases of ocular adnexal MALT lymphoma diagnosed histopathologically between 2008 and 2020. Total RNA was extracted from formalin-fixed paraffin-embedded tissue of ocular adnexal MALT lymphoma, and RNA sequencing was performed. Orbital MALT lymphoma gene expression was compared with that of conjunctival MALT lymphoma. Gene set (GS) analysis detecting gene set clusters was performed on the RNA-sequencing data. Related proteins were further examined by immunohistochemical staining. In addition, artificial segmentation images were used to quantify the stromal area in HE-stained images.
Results: GS analysis showed expression differences in 29 GS types in primary orbital MALT lymphoma (N = 5, 5; FDR q-value < 0.25). The GS with the greatest difference in expression was that of epithelial-mesenchymal transition (EMT). Based on this GS change, immunohistochemical staining was added using E-cadherin as an epithelial marker and vimentin as a mesenchymal marker for EMT. There was significant staining of vimentin in orbital lymphoma (P < 0.01, N = 129) and of E-cadherin in conjunctival lesions (P = 0.023, N = 129). Vimentin staining correlated with Ann Arbor staging (1 versus >1) independently of age and sex on multivariate analysis (P = 0.004). The stromal area within tumors differed significantly (P < 0.01).
Conclusion: GS changes, including EMT, and the stromal area within tumors demonstrated the molecular biological differences between conjunctival MALT lymphoma and orbital MALT lymphoma in ocular adnexal lymphomas.